Search results for "sound synthesis"
Showing 9 of 9 documents
Soundscape design through evolutionary engines
2008
Two implementations of an Evolutionary Sound Synthesis method using the Interaural Time Difference (ITD) and psychoacoustic descriptors are presented here as a way to develop criteria for fitness evaluation. We also explore a relationship between adaptive sound evolution and three soundscape characteristics: keysounds, key-signals and sound-marks. The Sonic Localization Field is defined using a sound attenuation factor and an ITD azimuth angle, respectively (Ii, Li). These pairs are used to build Spatial Sound Genotypes (SSG) and are extracted from a waveform population set. An explanation of how our model was initially written in MATLAB is followed by a recent Pure Data (Pd) impleme…
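As a side note on the ITD quantity this abstract relies on, here is a minimal sketch (not from the paper) of how an azimuth angle maps to an interaural time difference, using the standard Woodworth spherical-head approximation; the head radius and speed of sound are illustrative assumed constants:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
HEAD_RADIUS = 0.0875    # m, average human head (assumed)

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth spherical-head approximation of the Interaural
    Time Difference for a source at the given azimuth angle."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source straight ahead gives zero ITD; a source at 90 degrees
# azimuth gives the maximum delay between the two ears (~0.66 ms).
print(round(itd_seconds(90.0) * 1000, 2))  # ≈ 0.66
```

In a localization field such as the one described above, an (intensity, ITD) pair of this kind would be attached to each individual in the waveform population.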
Reverberation still in business: Thickening and propagating micro-textures in physics-based sound modeling
2015
Artificial reverberation is usually introduced, as a digital audio effect, to give a sense of enclosing architectural space. In this paper we argue for the effectiveness and usefulness of diffusive reverberators in physically inspired sound synthesis. Examples are given for the synthesis of textural sounds, as they emerge from solid mechanical interactions as well as from aerodynamic and liquid phenomena.
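The diffusive building blocks this abstract refers to can be illustrated with a Schroeder allpass section, the classic primitive of diffusive reverberators; the delay length and gain below are illustrative, not values from the paper:

```python
def allpass_diffuser(signal, delay=113, g=0.5):
    """Schroeder allpass section: y[n] = -g*x[n] + x[n-d] + g*y[n-d].
    The magnitude response is flat; only phase is smeared, which
    diffuses transients without colouring the spectrum."""
    buf = [0.0] * delay           # circular delay line
    out = []
    for i, x in enumerate(signal):
        delayed = buf[i % delay]  # holds x[n-d] + g*y[n-d]
        y = -g * x + delayed
        buf[i % delay] = x + g * y
        out.append(y)
    return out

# Impulse response: a direct term of -g, then decaying echoes
# spaced `delay` samples apart.
ir = allpass_diffuser([1.0] + [0.0] * 300)
```

Chaining several such sections with mutually prime delay lengths is the usual way to thicken a sparse micro-texture into a dense one.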
Embodied sound design
2018
Embodied sound design is a process of sound creation that involves the designer’s vocal apparatus and gestures. The possibilities of vocal sketching were investigated by means of an art installation. An artist–designer interpreted several vocal self-portraits and rendered the corresponding synthetic sketches by using physics-based and concatenative sound synthesis. Both synthesis techniques afforded a broad range of artificial sound objects, from concrete to abstract, all derived from natural vocalisations. The vocal-to-synthetic transformation process was then automated in SEeD, a tool that allows setting up and playing interactively with physics- or corpus-based sound models. The voice-dri…
Numerical methods for a nonlinear impact model: A comparative study with closed-form corrections
2011
A physically based impact model, already known and exploited in the field of sound synthesis, is studied using both analytical tools and numerical simulations. It is shown that the Hamiltonian of a physical system composed of a mass impacting on a wall can be expressed analytically as a function of the mass velocity during contact. Moreover, an efficient and accurate approximation for the mass outbound velocity is presented, which makes it possible to estimate the Hamiltonian at the end of the contact. Analytical results are then compared to numerical simulations obtained by discretizing the system with several numerical methods. It is shown that, for some regions of the parameter space, the trajectorie…
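As an illustration of the kind of system the abstract discusses, here is a sketch of a mass–wall impact discretized with semi-implicit Euler, using a Hunt–Crossley style nonlinear contact force; all parameter values are invented for illustration, and the paper itself compares several more sophisticated discretizations:

```python
def simulate_impact(v_in=1.0, mass=0.01, k=1e4, lam=5e3, alpha=1.5,
                    dt=1e-5, steps=5000):
    """Semi-implicit Euler simulation of a mass hitting a wall with
    a Hunt-Crossley style contact force f = k*x^a + lam*x^a*v,
    active only while the compression x is positive. The velocity-
    dependent term dissipates energy, so the outbound speed is
    smaller than the inbound speed (restitution < 1)."""
    x, v = 0.0, v_in              # compression and its velocity
    for _ in range(steps):
        f = (k * x**alpha + lam * x**alpha * v) if x > 0 else 0.0
        v -= dt * f / mass        # contact force opposes compression
        x += dt * v
        if x <= 0 and v < 0:      # mass has left the wall
            break
    return -v                     # outbound speed of the mass

v_out = simulate_impact()         # strictly less than v_in = 1.0
```

The closed-form correction mentioned in the abstract plays the role of predicting this outbound velocity analytically, without running such a step-by-step simulation.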
Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony
2009
We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos × three accents × nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions × nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially fo…
The Sound Design Toolkit
2017
The Sound Design Toolkit is a collection of physically informed sound synthesis models, specifically designed for practice and research in Sonic Interaction Design. The collection is based on a hierarchical, perceptually founded taxonomy of everyday sound events, and implemented by procedural audio algorithms which emphasize the role of sound as a process rather than a product. The models are intuitive to control – and the resulting sounds easy to predict – as they rely on basic everyday listening experience. Physical descriptions of sound events are intentionally simplified to emphasize the most perceptually relevant timbral features, and to reduce computational requirements as well.
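In the spirit of the procedural, process-oriented approach described above, a single damped mode excited by an impact is about the smallest possible physically informed model; the frequency, decay rate, and gain below are illustrative, not values taken from the toolkit:

```python
import math

def modal_impact(freq=440.0, decay=8.0, gain=0.5, sr=44100, dur=0.25):
    """One damped sinusoidal mode excited by an impulse at t = 0:
    an exponentially decaying sine, the basic ingredient of modal,
    physically informed synthesis. Returns a list of samples."""
    n = int(sr * dur)
    return [gain * math.exp(-decay * t / sr)
            * math.sin(2 * math.pi * freq * t / sr)
            for t in range(n)]

samples = modal_impact()  # 0.25 s of a decaying 440 Hz mode
```

Here the sound is literally a process (a differential equation running forward), not a stored sample, which is what makes parameters such as decay intuitive to control.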
Sketching sonic interactions by imitation-driven sound synthesis
2016
Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred tools to produce sketches for their simplicity and immediacy. Analogue tools for sonic sketching do not exist yet, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of human voice and gesture, making it possible to produce more refined sonic sketches and to think a…
Action expertise reduces brain activity for audiovisual matching actions: An fMRI study with expert drummers
2011
When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are extensively experienced by musicians but not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound in…
Non-speech voice for sonic interaction: a catalogue
2016
This paper surveys the uses of non-speech voice as an interaction modality within sonic applications. Three main contexts of use have been identified: sound retrieval, sound synthesis and control, and sound design. An overview of different choices and techniques regarding the style of interaction, the selection of vocal features, and their mapping to sound features or controls is presented here. A comprehensive collection of examples instantiates the use of non-speech voice in actual tools for sonic interaction. It is pointed out that while voice-based techniques are already being used proficiently in sound retrieval and sound synthesis, their use in sound design is still at an exploratory p…